

An explainable approach to detect case law on housing and eviction issues within the HUDOC database

Mohammadi, Mohammad, Wieling, Martijn, Vols, Michel

arXiv.org Artificial Intelligence

Case law is instrumental in shaping our understanding of human rights, including the right to adequate housing. The HUDOC database provides access to the textual content of case law from the European Court of Human Rights (ECtHR), along with some metadata. While this metadata includes valuable information, such as the application number and the articles addressed in a case, it often lacks detailed substantive insights, such as the specific issues a case covers. This underscores the need for detailed analysis to extract such information. However, given the size of the database - containing over 40,000 cases - an automated solution is essential. In this study, we focus on the right to adequate housing and aim to build models to detect cases related to housing and eviction issues. Our experiments show that the resulting models not only provide performance comparable to more sophisticated approaches but are also interpretable, offering explanations for their decisions by highlighting the most influential words. The application of these models led to the identification of new cases that were initially overlooked during data collection. This suggests that NLP approaches can be effectively applied to categorise case law based on the specific issues it addresses.


Combining topic modelling and citation network analysis to study case law from the European Court of Human Rights on the right to respect for private and family life

Mohammadi, M., Bruijn, L. M., Wieling, M., Vols, M.

arXiv.org Artificial Intelligence

Case law plays a crucial role in legal research, particularly in the context of human rights. Many international human rights conventions, such as the European Convention on Human Rights (ECHR), are considered 'living instruments', which means that human rights should be interpreted in light of present-day conditions and in accordance with developments in international law [1]. Fundamental human rights, such as the right to respect for private and family life, home, and correspondence as enshrined in Article 8 of the ECHR, serve as broad normative standards that (may) evolve in response to societal changes and international consensus. For example, the meaning of 'correspondence' has changed significantly with the internet and the progression of technology, and what is considered 'family life' [2] or a 'home' is also ever-developing [3]. Consequently, the interpretation and application of human rights undergo continuous development, requiring legal scholars and practitioners to rely heavily on the case law established by international courts, such as the European Court of Human Rights (ECtHR). However, the volume of case law is ever-increasing, which makes it challenging for legal scholars to discover relevant cases and gain a comprehensive understanding of this vast amount of information.
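An illustrative sketch of the two techniques named in the title, combined on toy data: LDA topic modelling over case texts to characterise what cases are about, and PageRank over a citation graph to identify influential cases. The case names, texts, and citation edges are invented placeholders, not real ECtHR cases, and the abstract does not commit to these specific algorithms or parameters.

```python
# Illustrative sketch: topic modelling (LDA) plus citation network
# analysis (PageRank) on toy data standing in for ECtHR case law.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import networkx as nx

texts = [
    "family life child custody parent",
    "home eviction housing tenant",
    "correspondence surveillance interception letters",
    "family life marriage child",
]
X = CountVectorizer().fit_transform(texts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)  # per-case topic proportions

# Toy citation network: an edge A -> B means case A cites case B.
g = nx.DiGraph([("CaseA", "CaseB"), ("CaseC", "CaseB"), ("CaseD", "CaseA")])
rank = nx.pagerank(g)
central = max(rank, key=rank.get)  # most-cited, most central case
print(central)
```

Joining the two views - topic proportions per case and centrality per case - lets a researcher ask, for instance, which cases are most influential within a given Article 8 topic.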


On the Role of Negative Precedent in Legal Outcome Prediction

Valvoda, Josef, Cotterell, Ryan, Teufel, Simone

arXiv.org Artificial Intelligence

Every legal case sets a precedent by developing the law in one of the following two ways. It either expands its scope, in which case it sets positive precedent, or it narrows it, in which case it sets negative precedent. Legal outcome prediction, the prediction of positive outcome, is an increasingly popular task in AI. In contrast, we turn our focus to negative outcomes here, and introduce a new task of negative outcome prediction. We discover an asymmetry in existing models' ability to predict positive and negative outcomes. Where the state-of-the-art outcome prediction model we used predicts positive outcomes at 75.06 F1, it predicts negative outcomes at only 10.09 F1, worse than a random baseline. To address this performance gap, we develop two new models inspired by the dynamics of a court process. Our first model significantly improves positive outcome prediction score to 77.15 F1 and our second model more than doubles the negative outcome prediction performance to 24.01 F1. Despite this improvement, shifting focus to negative outcomes reveals that there is still much room for improvement for outcome prediction models.


Court finds some fault with UK police force's use of facial recognition tech – TechCrunch

#artificialintelligence

Civil rights campaigners in the UK have won a legal challenge to South Wales Police's (SWP) use of facial recognition technology. The win on appeal is being hailed as a "world-first" victory in the fight against the use of an "oppressive surveillance tool", as human rights group Liberty puts it. However, the police force does not intend to appeal the ruling -- and has said it remains committed to "careful" use of the tech. The back story here is that SWP has been trialing automated facial recognition (AFR) technology since 2017, deploying a system known as AFR Locate on around 50 occasions between May 2017 and April 2019 at a variety of public events in Wales. The force has used the technology in conjunction with watchlists of between 400 and 800 people -- which included persons wanted on warrants; persons who had escaped from custody; persons suspected of having committed crimes; persons who may be in need of protection; vulnerable persons; persons of possible interest to it for intelligence purposes; and persons whose presence at a particular event causes particular concern, per a press summary issued by the appeals court.


Supreme Court to rule on 'paedophile hunters' case

BBC News

A convicted paedophile who was snared by a vigilante group is to have his case examined at the UK Supreme Court. Judges at the UK's highest court will consider whether prosecutions based on the covert operations of "paedophile hunters" breach the right to privacy. Mark Sutherland, 37, believed he was communicating with a 13-year-old boy on the dating app Grindr. But in reality it was a 48-year-old man who was part of a group called Groom Resisters Scotland. The Supreme Court will hold a virtual hearing to consider the case and will issue its judgement later.


Healthcare Robots and the Right to Privacy

VideoLectures.NET

The paper presents the author's conclusions, derived from the fact that the increasing autonomy of robots is not science fiction but a well-known feature of the modern era that requires a comprehensive and systematic legal approach. However, the European Parliament's recently issued recommendation to consider robots as electronic persons seems inappropriate from a human rights perspective and may result in serious violations of the fundamental rights attached to all human beings. This article focuses on the negative consequences of automation and the impact they have on health law and the right to privacy. The protection of a patient's clinical records, a fundamental principle of healthcare ethics, is a cornerstone of the confidential doctor-patient relationship. The latter is, due to its importance, protected not only by national health legislation but also by Article 8 of the European Convention on Human Rights, the right to privacy.